Interpretable algorithm


Why employees are more likely to second-guess interpretable algorithms

#artificialintelligence

More and more, workers are presented with algorithms to help them make better decisions. But for humans to follow that advice, they must trust the algorithms. The way humans view algorithmic recommendations varies depending on how much they know about how the model works and how it was created, according to a new research paper co-authored by MIT Sloan professor Kate Kellogg. Prior research has assumed that people are more likely to trust interpretable artificial intelligence models, in which they are able to see how the models make their recommendations. But Kellogg and co-researchers Tim DeStefano, Michael Menietti, and Luca Vendraminelli, affiliated with the Laboratory for Innovation Science at Harvard, found that this isn't always true.


Four interpretable algorithms that you should use in 2022

#artificialintelligence

The new year has begun, and it is the time for good resolutions. One of them could be to make decision-making processes more interpretable. To help you do this, I present four interpretable rule-based algorithms. These four algorithms all use an ensemble of decision trees (such as Random Forest, AdaBoost, or Gradient Boosting) as a rule generator. In other words, each of these interpretable algorithms starts by fitting a black-box ensemble model and then distills it into an interpretable rule ensemble model.
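The shared first step of these algorithms, enumerating the root-to-leaf paths of a fitted tree ensemble as candidate rules, can be sketched in a few lines. The tree encoding (nested dicts) and the function names below are illustrative assumptions, not taken from any of the four libraries:

```python
# Minimal sketch of rule generation from a tree ensemble: every
# root-to-leaf path of every tree becomes one candidate rule.
# The nested-dict tree encoding is a hypothetical illustration.

def extract_rules(tree, path=()):
    """Yield one rule (a tuple of conditions) per leaf of a tree."""
    if "leaf" in tree:                      # reached a leaf: the path is a rule
        yield path
        return
    feat, thr = tree["feature"], tree["threshold"]
    yield from extract_rules(tree["left"], path + ((feat, "<=", thr),))
    yield from extract_rules(tree["right"], path + ((feat, ">", thr),))

def ensemble_rules(trees):
    """Pool the rules generated by every tree in the ensemble."""
    rules = []
    for t in trees:
        rules.extend(extract_rules(t))
    return rules

# A toy two-tree "ensemble": each internal node splits on one feature.
trees = [
    {"feature": "x1", "threshold": 0.5,
     "left": {"leaf": 0},
     "right": {"feature": "x2", "threshold": 1.0,
               "left": {"leaf": 0}, "right": {"leaf": 1}}},
    {"feature": "x2", "threshold": 0.8,
     "left": {"leaf": 0}, "right": {"leaf": 1}},
]

rules = ensemble_rules(trees)
print(len(rules))  # 5 rules: 3 leaves from the first tree, 2 from the second
```

The pooled rules would then be filtered and combined by each algorithm in its own way; this sketch covers only the generation step the four methods have in common.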


An Interpretable Algorithm for Uveal Melanoma Subtyping from Whole Slide Cytology Images

Chen, Haomin, Liu, T. Y. Alvin, Gomez, Catalina, Correa, Zelia, Unberath, Mathias

arXiv.org Artificial Intelligence

Algorithmic decision support is rapidly becoming a staple of personalized medicine, especially for high-stakes recommendations in which access to certain information can drastically alter the course of treatment, and thus, patient outcome; a prominent example is radiomics for cancer subtyping. Because in these scenarios the stakes are high, it is desirable for decision systems to not only provide recommendations but also supply transparent reasoning in support thereof. For learning-based systems, this can be achieved through an interpretable design of the inference pipeline. Herein we describe an automated yet interpretable system for uveal melanoma subtyping with digital cytology images from fine needle aspiration biopsies. Our method embeds every automatically segmented cell of a candidate cytology image as a point in a 2D manifold defined by many representative slides, which enables reasoning about the cell-level composition of the tissue sample, paving the way for interpretable subtyping of the biopsy. Finally, a rule-based slide-level classification algorithm is trained on the partitions of the circularly distorted 2D manifold. This process results in a simple rule set that is evaluated automatically but highly transparent for human verification. On our in-house cytology dataset of 88 uveal melanoma patients, the proposed method achieves an accuracy of 87.5% that compares favorably to all competing approaches, including deep "black box" models. The method comes with a user interface to facilitate interaction with cell-level content, which may offer additional insights for pathological assessment.
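The pipeline described in the abstract can be caricatured in a few lines: embed each cell as a 2D point, summarize a slide by the fraction of its cells falling in each partition of the manifold, and classify the slide with a human-readable rule. The angular partitioning, the threshold, and every name below are hypothetical illustrations, not the authors' implementation:

```python
# Hedged sketch of the abstract's pipeline: cells are points in a 2D
# manifold; a slide is summarized by its cell composition over manifold
# partitions; a transparent rule classifies the slide.
# Partitions, thresholds, and class names are assumptions for illustration.
import math

def sector(point, n_sectors=4):
    """Assign a 2D-embedded cell to an angular sector of the manifold."""
    angle = math.atan2(point[1], point[0]) % (2 * math.pi)
    return int(angle / (2 * math.pi / n_sectors))

def slide_composition(cells, n_sectors=4):
    """Fraction of a slide's cells falling in each manifold partition."""
    counts = [0] * n_sectors
    for c in cells:
        counts[sector(c, n_sectors)] += 1
    return [k / len(cells) for k in counts]

def classify_slide(cells, threshold=0.3):
    """Transparent rule: flag the slide if sector 2 holds > 30% of cells."""
    comp = slide_composition(cells)
    return "class 2" if comp[2] > threshold else "class 1"

# Toy slide: most cells embed into the third angular sector (x < 0, y < 0).
cells = [(-1.0, -0.5), (-0.8, -0.9), (-1.2, -0.1), (0.5, 0.5)]
print(classify_slide(cells))  # prints "class 2"
```

The appeal of such a design is that the decision can be audited end to end: a pathologist can inspect which cells landed in which partition and check the rule by hand.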


Considerations Across Three Cultures: Parametric Regressions, Interpretable Algorithms, and Complex Algorithms

Eloyan, Ani, Rose, Sherri

arXiv.org Machine Learning

The relevance of the themes presented in "Statistical Modeling: The Two Cultures" by Leo Breiman remains twenty years later. While we could consider many categorizations of statistics culture, we posit that at least three cultures have emerged. We still have the data modeling group with regressions defined within parametric models, but the algorithmic modeling culture has, at a minimum, bifurcated with interpretable algorithms and (possibly explainable) complex algorithms [Rudin, 2019]. Practitioners of algorithmic modeling may develop interpretable or complex algorithms or both, depending on the needs of the scientific question. As empirically driven statisticians, we would prefer to collect data on this before putting forth a supposition, but, in lieu of such data, we also surmise that the algorithmic modeling culture has grown larger over time than Breiman's proposed 2% of statisticians. In this commentary, we remark on several areas of increasing concern for statisticians--in all three cultures--working to solve real, substantive problems.


A rigorous method to compare interpretability of rule-based algorithms

Margot, Vincent

arXiv.org Machine Learning

Interpretability is becoming increasingly important in predictive model analysis. Unfortunately, as many authors have noted, there is still no consensus on what the concept means. The aim of this article is to propose a rigorous mathematical definition of interpretability that allows fair comparisons between any rule-based algorithms. The definition is built from three notions, each quantitatively measured by a simple formula: predictivity, stability, and simplicity. While predictivity has been widely studied as a measure of the accuracy of predictive algorithms, stability is based on the Dice-Sorensen index, comparing the two sets of rules generated by an algorithm from two independent samples. Simplicity is based on the sum of the lengths of the rules in the generated model. The final objective measure of the interpretability of any rule-based algorithm is a weighted sum of these three quantities. The paper concludes by comparing the interpretability of four rule-based algorithms.
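The three ingredients combine into a single score. A minimal sketch, modeling rules as frozensets of conditions; the weights, and in particular the convention that simplicity enters inverted so shorter rule lists score higher, are assumptions for illustration rather than the paper's exact normalization:

```python
# Sketch of the paper's three notions. Rules are modeled as frozensets
# of conditions; the weights and the inverse-simplicity convention are
# assumptions for illustration, not the paper's exact formulas.

def stability(rules_a, rules_b):
    """Dice-Sorensen index between rule sets from two independent samples:
    2 * |A & B| / (|A| + |B|)."""
    a, b = set(rules_a), set(rules_b)
    return 2 * len(a & b) / (len(a) + len(b))

def simplicity(rules):
    """Sum of rule lengths (number of conditions per rule)."""
    return sum(len(r) for r in rules)

def interpretability(predictivity, stab, simp, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the three notions; simplicity is inverted here so
    that shorter rule lists score higher (an assumed convention)."""
    wp, ws, wc = weights
    return wp * predictivity + ws * stab + wc / simp

# Two rule sets generated from two independent samples of the same data.
r1 = {frozenset({("x1", "<=", 0.5)}),
      frozenset({("x2", ">", 1.0), ("x1", ">", 0.5)})}
r2 = {frozenset({("x1", "<=", 0.5)}),
      frozenset({("x3", ">", 2.0)})}

print(stability(r1, r2))   # 0.5: one shared rule, sets of size 2 and 2
print(simplicity(r1))      # 3 conditions in total across both rules
```

Because stability is computed over two independent samples, it penalizes algorithms whose rule sets change drastically under resampling, exactly the kind of fragility that undermines trust in a "transparent" model.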